Unsupervised pre-training on millions of digital-born or scanned documents has shown promising advances in visual document understanding~(VDU). While various vision-language pre-training objectives are studied in existing solutions, the document textline, as an intrinsic granularity in VDU, has seldom been explored so far. A document textline usually contains words that are spatially and semantically correlated, which can be easily obtained from OCR engines. In this paper, we propose Wukong-Reader, trained with new pre-training objectives to leverage the structural knowledge nested in document textlines. We introduce textline-region contrastive learning to achieve fine-grained alignment between the visual regions and texts of document textlines. Furthermore, masked region modeling and textline-grid matching are also designed to enhance the visual and layout representations of textlines. Experiments show that our Wukong-Reader has superior performance on various VDU tasks such as information extraction. The fine-grained alignment over textlines also empowers Wukong-Reader with promising localization ability.
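The textline-region contrastive objective described above can be sketched as a symmetric InfoNCE loss over matched (region, text) embedding pairs. This is a hedged illustration, not Wukong-Reader's exact formulation: the temperature value, embedding shapes, and the symmetric cross-entropy form are assumptions.

```python
import numpy as np

def info_nce(region_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over matched textline regions and texts.

    region_emb, text_emb: (N, D) arrays; row i of each is a matched pair.
    A sketch of textline-region contrastive alignment; the paper's exact
    loss may differ.
    """
    # L2-normalize so dot products are cosine similarities
    r = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = r @ t.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(r))              # diagonal pairs are the positives

    def xent(lg):
        # cross-entropy of each row against its diagonal (matched) entry
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the region->text and text->region directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Matched pairs should yield a much lower loss than mismatched ones, which is what drives the fine-grained alignment.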
Diagnosing Alzheimer's disease (AD) in its early stages is essential for timely treatment to slow further deterioration. Visualizing the morphological features of the early stages of AD is of great clinical value. In this work, a novel Multidirectional Perception Generative Adversarial Network (MP-GAN) is proposed to visualize the morphological features indicating the severity of AD for patients at different stages. Specifically, by introducing a novel multidirectional mapping mechanism into the model, the proposed MP-GAN can capture the salient global features efficiently. Thus, by utilizing the class-discriminative map from the generator, the proposed model can clearly delineate the subtle lesions via MR image transformations between the source domain and the predefined target domain. Moreover, by integrating the adversarial loss, classification loss, cycle consistency loss, and \emph{L}1 penalty, a single generator in MP-GAN can learn the class-discriminative maps for multiple classes. Extensive experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that MP-GAN achieves superior performance compared with existing methods. The lesions visualized by MP-GAN are also consistent with what clinicians observe.
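The generator objective combining the four loss terms named above can be sketched as a weighted sum. This is an assumption-laden illustration: the loss weights, the non-saturating adversarial form, and the L1-penalty-toward-the-source term are all guesses at plausible choices, not MP-GAN's published formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mp_gan_generator_loss(d_fake_logit, cls_logprob_target, real, recon, fake,
                          lambda_cls=1.0, lambda_cyc=10.0, lambda_l1=1.0):
    """Weighted sum of the four terms described for MP-GAN's single
    generator. Weights and exact forms are illustrative assumptions."""
    adv = -np.mean(np.log(sigmoid(d_fake_logit) + 1e-12))  # adversarial loss
    cls = -np.mean(cls_logprob_target)   # class-discriminative classification loss
    cyc = np.mean(np.abs(real - recon))  # cycle-consistency (reconstruction) loss
    l1 = np.mean(np.abs(fake - real))    # L1 penalty toward the source image
    return adv + lambda_cls * cls + lambda_cyc * cyc + lambda_l1 * l1
```

A generator that fools the discriminator, classifies correctly, and reconstructs the input should score strictly lower than one that fails on all four terms.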
We propose a manager-worker framework based on deep reinforcement learning to tackle a hard yet nontrivial variant of the Travelling Salesman Problem (TSP), i.e., the multiple-vehicle TSP with time windows and rejections (MTSPTWR), where customers who cannot be served before the deadline are subject to rejection. In particular, in the proposed framework, a manager agent learns to divide MTSPTWR into sub-routing tasks by assigning customers to each vehicle via a Graph Isomorphism Network (GIN)-based policy network. A worker agent learns to solve the sub-routing tasks by minimizing the cost in terms of both tour length and rejection rate for each vehicle, the maximum of which is then fed back to the manager agent to learn better assignments. Experimental results demonstrate that the proposed framework outperforms strong baselines in terms of both higher solution quality and shorter computation time. More importantly, the trained agents also achieve competitive performance in solving unseen larger instances.
We present an efficient Neural Neighborhood Search (N2S) approach for pickup and delivery problems (PDPs). Specifically, we design a powerful Synthesis Attention that allows the vanilla self-attention to synthesize various features regarding a route solution. We also exploit two customized decoders that automatically learn to perform removal and reinsertion of a pickup-delivery node pair to tackle the precedence constraint. Additionally, a diversity enhancement scheme is leveraged to further improve the performance. Our N2S is generic, and extensive experiments on two canonical PDP variants show that it achieves state-of-the-art results among existing neural methods. Moreover, it even outperforms the well-known LKH3 solver on the more constrained PDP variant. Our implementation of N2S is available online.
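The removal-and-reinsertion move over a pickup-delivery pair can be sketched in a few lines. In N2S the two insertion positions come from learned decoders; here the caller supplies them directly, which is an illustrative simplification, as are the list-based route representation and node naming.

```python
def remove_and_reinsert(route, pickup, delivery, new_p_idx, new_d_idx):
    """N2S-style neighborhood move: remove one pickup-delivery pair from a
    route and reinsert it so the pickup still precedes its delivery,
    respecting the precedence constraint of PDPs."""
    assert new_p_idx < new_d_idx, "pickup must precede its delivery"
    # drop the pair, then reinsert pickup first so indices stay consistent
    rest = [n for n in route if n not in (pickup, delivery)]
    rest.insert(new_p_idx, pickup)
    rest.insert(new_d_idx, delivery)
    return rest
```

Because the precedence check is enforced before insertion, every move produced this way is feasible with respect to pickup-before-delivery.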
Recently, Transformer has become a prevailing deep architecture for solving vehicle routing problems (VRPs). However, it is less effective in learning improvement models for VRPs because its positional encoding (PE) method is not suitable for representing VRP solutions. This paper presents a novel Dual-Aspect Collaborative Transformer (DACT) to learn embeddings for the node and positional features separately, instead of fusing them together as done in existing ones, so as to avoid potential noise and incompatible correlations. Moreover, the positional features are embedded through a novel Cyclic Positional Encoding (CPE) method to allow the Transformer to effectively capture the circularity and symmetry of VRP solutions (i.e., cyclic sequences). We train DACT with Proximal Policy Optimization and design a curriculum learning strategy for better sample efficiency. We apply DACT to solve the Travelling Salesman Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP). Results show that our DACT outperforms existing Transformer-based improvement models, and exhibits better generalization performance across different problem sizes on both synthetic and benchmark instances.
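One plausible form of a cyclic positional encoding places each tour position on a circle, so the encoding of position n wraps around to coincide with position 0. This is a hedged sketch of the idea; the paper's actual CPE scheme is more elaborate, and the choice of integer harmonics here is an assumption.

```python
import numpy as np

def cyclic_positional_encoding(n, d):
    """Sinusoidal encoding on a circle. Position i is mapped to an angle
    2*pi*i/n, so distances between encodings depend only on the cyclic
    offset (i - j) mod n, matching the circularity of a VRP tour.
    Returns an (n, d) array; d is assumed even."""
    angles = 2 * np.pi * np.arange(n)[:, None] / n   # (n, 1) positions as angles
    freqs = np.arange(1, d // 2 + 1)[None, :]        # harmonics 1..d/2
    return np.concatenate([np.sin(angles * freqs),
                           np.cos(angles * freqs)], axis=1)
```

A quick sanity check: the distance between the encodings of positions 0 and 1 equals the distance between positions n-1 and 0, which an ordinary (linear) sinusoidal PE does not satisfy.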
Backtracking search algorithms are often used to solve Constraint Satisfaction Problems (CSPs). The efficiency of backtracking search depends greatly on the variable ordering heuristic. Currently, the most commonly used heuristics are hand-crafted based on expert knowledge. In this paper, we propose a deep reinforcement learning based approach to automatically discover new variable ordering heuristics that are better adapted to a given class of CSP instances. We show that directly optimizing the search cost is hard for bootstrapping, and propose to optimize the expected cost of reaching a leaf node in the search tree. To capture the complex relations among the variables and constraints, we design a representation scheme based on Graph Neural Networks that can process CSP instances with different sizes and constraint arities. Experimental results on random CSP instances show that the learned policies outperform classical hand-crafted heuristics in terms of minimizing the search tree size, and can effectively generalize to instances larger than those used in training.
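GNN-style scoring over a variable-constraint bipartite graph might look like the following: variables and constraints exchange mean-aggregated messages for a few rounds, then each variable is mapped to a scalar score. Every weight shape, the tanh nonlinearity, and the mean aggregation are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def score_variables(A, var_feat, con_feat, W_v, W_c, w_out, rounds=2):
    """Toy bipartite message passing for variable ordering.

    A: (n_vars, n_cons) incidence matrix (1 if variable i is in constraint j).
    var_feat: (n_vars, D), con_feat: (n_cons, D) initial features.
    Returns one score per variable; all shapes are illustrative."""
    deg_v = np.maximum(A.sum(1, keepdims=True), 1)      # variable degrees
    deg_c = np.maximum(A.sum(0, keepdims=True).T, 1)    # constraint degrees
    v, c = var_feat, con_feat
    for _ in range(rounds):
        c = np.tanh((A.T @ v) / deg_c @ W_c)  # constraints gather from variables
        v = np.tanh((A @ c) / deg_v @ W_v)    # variables gather from constraints
    return v @ w_out                          # scalar score per variable
```

The branching variable would then be chosen as the argmax (or a softmax sample) of the returned scores, with the weights trained by reinforcement learning against the search-tree-size objective.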
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
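The NAIVEATTACK idea of stamping a trigger into the raw data before distillation can be sketched as a simple patch overwrite. The patch size, location, and pixel values here are illustrative assumptions; the paper's triggers (and DOORPING's iteratively updated ones) may differ.

```python
import numpy as np

def add_trigger(images, trigger, row=0, col=0):
    """Stamp a small trigger patch into every image of a batch.

    images: (N, H, W) array; trigger: (th, tw) patch.
    Returns a poisoned copy; the originals are left untouched."""
    out = images.copy()
    th, tw = trigger.shape
    out[:, row:row + th, col:col + tw] = trigger  # overwrite the patch region
    return out
```

Injecting such patches into the inputs of the distillation procedure (rather than into model training) is what distinguishes this threat model from classical backdoor attacks.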
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
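The feature-level "reference" step, generating a class center from the support mask and using it to re-weight query features, can be sketched with masked average pooling. The sigmoid-of-cosine-similarity gating is an assumption about one plausible re-weighting form, not RefT's exact module.

```python
import numpy as np

def dynamic_class_center(support_feat, support_mask):
    """Masked average pooling: collapse support features inside the object
    mask into one class center. support_feat: (H, W, D); support_mask:
    (H, W) binary."""
    m = support_mask[..., None]
    return (support_feat * m).sum((0, 1)) / np.maximum(m.sum(), 1)

def reweight_query(query_feat, center):
    """Re-weight query features by (sigmoid-squashed) cosine similarity to
    the class center; the gating form is an illustrative assumption."""
    n = np.linalg.norm(query_feat, axis=-1, keepdims=True)
    q = query_feat / np.maximum(n, 1e-12)
    c = center / np.maximum(np.linalg.norm(center), 1e-12)
    sim = q @ c                             # (H, W) cosine similarity map
    gate = 1.0 / (1.0 + np.exp(-sim))       # squash to (0, 1)
    return query_feat * gate[..., None]     # amplify center-like locations
```

Query locations resembling the support object are amplified relative to background, which is the intended effect of the dynamic weighting module.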
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this observation, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
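A toy numpy sketch of the iRMB idea: expand channels with a 1x1 projection, mix tokens with both single-head self-attention (long-distance interaction) and a depthwise 3x3 convolution (short-distance dependency), then project back with a residual connection. Every weight shape, the single attention head, and the additive fusion of the two branches are illustrative assumptions, not EMO's actual block.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def irmb(x, W_exp, W_qkv, dw_kernel, W_proj):
    """Toy inverted-residual mobile block. x: (H, W, C).
    W_exp: (C, C'), W_qkv: (C', 3C'), dw_kernel: (3, 3, C'),
    W_proj: (C', C); all shapes are illustrative only."""
    h, w, c = x.shape
    y = x.reshape(h * w, c) @ W_exp             # 1x1 channel expansion
    cp = y.shape[1]
    # long-range branch: single-head self-attention over all tokens
    q, k, v = np.split(y @ W_qkv, 3, axis=1)
    attn = softmax(q @ k.T / np.sqrt(cp)) @ v
    # short-range branch: depthwise 3x3 convolution (one kernel per channel)
    ym = y.reshape(h, w, cp)
    local = np.zeros_like(ym)
    pad = np.pad(ym, ((1, 1), (1, 1), (0, 0)))
    for di in range(3):
        for dj in range(3):
            local += pad[di:di + h, dj:dj + w] * dw_kernel[di, dj]
    out = (attn + local.reshape(h * w, cp)) @ W_proj  # project back to C
    return x + out.reshape(h, w, c)                   # residual connection
```

Stacking such blocks in a ResNet-like 4-phase layout is, at a high level, how EMO is assembled from iRMBs.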